VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback
Modern recommender systems model people and items by discovering or 'teasing
apart' the underlying dimensions that encode the properties of items and users'
preferences toward them. Critically, such dimensions are uncovered based on
user feedback, often in implicit form (such as purchase histories, browsing
logs, etc.); in addition, some recommender systems make use of side
information, such as product attributes, temporal information, or review text.
However, one important feature that is typically ignored by existing
personalized recommendation and ranking methods is the visual appearance of the
items being considered. In this paper we propose a scalable factorization model
to incorporate visual signals into predictors of people's opinions, which we
apply to a selection of large, real-world datasets. We make use of visual
features extracted from product images using (pre-trained) deep networks, on
top of which we learn an additional layer that uncovers the visual dimensions
that best explain the variation in people's feedback. This not only leads to
significantly more accurate personalized ranking methods, but also helps to
alleviate cold start issues, and qualitatively to analyze the visual dimensions
that influence people's opinions.
Comment: AAAI'1
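The visual factorization idea in this abstract can be sketched as a scoring function: a standard latent-factor inner product plus a visual term, where a learned embedding matrix maps pre-extracted deep features into a small number of visual dimensions. The dimensions and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: K latent dims, D visual dims, F raw deep-feature dims.
K, D, F = 10, 5, 64

gamma_u = rng.normal(size=K)     # latent user factors
gamma_i = rng.normal(size=K)     # latent item factors
theta_u = rng.normal(size=D)     # visual user factors
E = rng.normal(size=(D, F))      # learned layer: deep features -> visual dims
f_i = rng.normal(size=F)         # pre-extracted deep features of item i's image
beta_i = 0.1                     # item bias (user/global biases omitted)

def vbpr_score(gamma_u, gamma_i, theta_u, E, f_i, beta_i):
    """Preference of user u for item i: latent term + visual term + bias."""
    return gamma_u @ gamma_i + theta_u @ (E @ f_i) + beta_i
```

In the paper the parameters are trained with a pairwise BPR-style objective on implicit feedback; the sketch above only shows the scoring function.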
Graph Convolutional Neural Networks for Web-Scale Recommender Systems
Recent advancements in deep neural networks for graph-structured data have
led to state-of-the-art performance on recommender system benchmarks. However,
making these methods practical and scalable to web-scale recommendation tasks
with billions of items and hundreds of millions of users remains a challenge.
Here we describe a large-scale deep recommendation engine that we developed and
deployed at Pinterest. We develop a data-efficient Graph Convolutional Network
(GCN) algorithm PinSage, which combines efficient random walks and graph
convolutions to generate embeddings of nodes (i.e., items) that incorporate
both graph structure as well as node feature information. Compared to prior GCN
approaches, we develop a novel method based on highly efficient random walks to
structure the convolutions and design a novel training strategy that relies on
harder-and-harder training examples to improve robustness and convergence of
the model. We also develop an efficient MapReduce model inference algorithm to
generate embeddings using a trained model. We deploy PinSage at Pinterest and
train it on 7.5 billion examples on a graph with 3 billion nodes representing
pins and boards, and 18 billion edges. According to offline metrics, user
studies and A/B tests, PinSage generates higher-quality recommendations than
comparable deep learning and graph-based alternatives. To our knowledge, this
is the largest application of deep graph embeddings to date and paves the way
for a new generation of web-scale recommender systems based on graph
convolutional architectures.
Comment: KDD 201
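The abstract's combination of random walks and graph convolutions can be illustrated with one simplified convolution step: random-walk visit counts supply importance weights for pooling neighbor embeddings, and the pooled neighborhood is concatenated with the node's own representation before a dense transform. Shapes and names here are hypothetical, and the deployed system adds much more (neighbor sampling, curriculum training, MapReduce inference):

```python
import numpy as np

def convolve(h_self, h_neighbors, visit_counts, W_self, W_agg):
    """One simplified PinSage-style convolution.

    h_self:       (d,)   current embedding of the node
    h_neighbors:  (n, d) embeddings of sampled neighbors
    visit_counts: (n,)   random-walk visit counts (importance weights)
    W_self, W_agg: (k, d) dense weights for the two concatenated halves
    """
    w = np.asarray(visit_counts, dtype=float)
    w = w / w.sum()                              # normalize importance weights
    n_agg = (w[:, None] * h_neighbors).sum(0)    # importance pooling
    z = np.concatenate([W_self @ h_self, W_agg @ n_agg])
    z = np.maximum(z, 0.0)                       # ReLU
    return z / (np.linalg.norm(z) + 1e-8)        # L2-normalized embedding
```

Stacking several such steps gives each item an embedding that mixes graph structure with node features, as the abstract describes.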
Indexable Bayesian personalized ranking for efficient top-k recommendation
National Research Foundation (NRF), Singapore
xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems
Combinatorial features are essential for the success of many commercial
models. Manually crafting these features usually comes with high cost due to
the variety, volume and velocity of raw data in web-scale systems.
Factorization based models, which measure interactions in terms of vector
product, can learn patterns of combinatorial features automatically and
generalize to unseen features as well. With the great success of deep neural
networks (DNNs) in various fields, recently researchers have proposed several
DNN-based factorization models to learn both low- and high-order feature
interactions. Despite the powerful ability of learning an arbitrary function
from data, plain DNNs generate feature interactions implicitly and at the
bit-wise level. In this paper, we propose a novel Compressed Interaction
Network (CIN), which aims to generate feature interactions in an explicit
fashion and at the vector-wise level. We show that CIN shares some
functionalities with convolutional neural networks (CNNs) and recurrent neural
networks (RNNs). We further combine a CIN and a classical DNN into one unified
model, and name this new model eXtreme Deep Factorization Machine (xDeepFM).
On one hand, the xDeepFM is able to learn certain bounded-degree feature
interactions explicitly; on the other hand, it can learn arbitrary low- and
high-order feature interactions implicitly. We conduct comprehensive
experiments on three real-world datasets. Our results demonstrate that xDeepFM
outperforms state-of-the-art models. We have released the source code of
xDeepFM at \url{https://github.com/Leavingseason/xDeepFM}.
Comment: 10 page
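The Compressed Interaction Network described above computes interactions at the vector-wise level: each layer takes Hadamard products between the previous layer's feature maps and the raw field embeddings, then compresses them with learned weights. A minimal NumPy sketch of one such layer (array shapes are illustrative, not from the paper):

```python
import numpy as np

def cin_layer(X_prev, X0, W):
    """One CIN layer.

    X_prev: (H_prev, D) feature maps from the previous layer
    X0:     (m, D)      raw field embeddings (m fields, D dims each)
    W:      (H_next, H_prev, m) compression weights

    Returns the next layer's (H_next, D) feature maps, each an explicit,
    vector-wise combination of pairwise Hadamard interactions.
    """
    # z[i, j, :] = X_prev[i] * X0[j] -- Hadamard product per embedding dim
    z = X_prev[:, None, :] * X0[None, :, :]   # (H_prev, m, D)
    return np.einsum('hij,ijd->hd', W, z)     # compress to (H_next, D)
```

In xDeepFM the outputs of all CIN layers are pooled and combined with a plain DNN branch, so the unified model captures both explicit bounded-degree and implicit arbitrary-order interactions.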
Learning from History and Present: Next-item Recommendation via Discriminatively Exploiting User Behaviors
In modern e-commerce, customers' behaviors contain rich
information, e.g., consumption habits and the dynamics of preferences. Recently,
session-based recommendations are becoming popular to explore the temporal
characteristics of customers' interactive behaviors. However, existing works
mainly exploit short-term behaviors without fully taking customers'
long-term stable preferences and their evolution into account. In this paper, we
propose a novel Behavior-Intensive Neural Network (BINN) for next-item
recommendation by incorporating both users' historical stable preferences and
present consumption motivations. Specifically, BINN contains two main
components, i.e., Neural Item Embedding, and Discriminative Behaviors Learning.
Firstly, a novel item embedding method based on user interactions is developed
for obtaining a unified representation for each item. Then, with the embedded
items and the interactive behaviors over item sequences, BINN discriminatively
learns the historical preferences and present motivations of the target users.
Thus, BINN can better recommend the next items for the
target users. Finally, to evaluate the performance of BINN, we conduct
extensive experiments on two real-world datasets, i.e., Tianchi and JD. The
experimental results clearly demonstrate the effectiveness of BINN compared
with several state-of-the-art methods.
Comment: 10 pages, 7 figures, KDD 201
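The core idea of fusing long-term stable preferences with present session motivations can be sketched as below. This is an illustrative simplification, not the BINN architecture itself (which uses neural item embeddings and discriminative behavior learning over interaction sequences); the fusion weight `alpha` is a hypothetical parameter:

```python
import numpy as np

def next_item_scores(long_term, session_state, item_emb, alpha=0.5):
    """Score all candidate next items for a user.

    long_term:     (d,)   vector summarizing historical stable preferences
    session_state: (d,)   vector summarizing present session motivations
    item_emb:      (n, d) embeddings of n candidate items
    alpha:         fusion weight between the two signals (hypothetical)
    """
    user_vec = alpha * long_term + (1.0 - alpha) * session_state
    return item_emb @ user_vec   # one relevance score per candidate item
```

Ranking the candidates by these scores yields the next-item recommendation; BINN instead learns the two signals discriminatively from behavior sequences.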